Robust Domain Adaptation: Representations, Weights and Inductive Bias
Authors
Abstract
Unsupervised Domain Adaptation (UDA) has attracted a lot of attention in the last ten years. The emergence of Invariant Representations (IR) has drastically improved the transferability of representations from a labelled source domain to a new and unlabelled target domain. However, a potential pitfall of this approach, namely the presence of label shift, has been brought to light. Some works address this issue with a relaxed version of invariance obtained by weighting samples, a strategy often referred to as Importance Sampling. From our point of view, the theoretical aspects of how Importance Sampling interacts with UDA have not been studied in depth. In the present work, we derive a bound on the target risk which incorporates both weights and invariant representations. Our analysis highlights the role of inductive bias in aligning distributions across domains. We illustrate it on standard benchmarks by proposing a new learning procedure for UDA. We observe empirically that a weak inductive bias makes adaptation more robust. The elaboration of stronger inductive biases is a promising direction for new algorithms.
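The weighting strategy mentioned in the abstract can be illustrated with a minimal sketch (not the paper's actual algorithm): under label shift, importance weights w(y) = p_T(y) / p_S(y) reweight source samples so that the weighted source label marginal matches the target's. For illustration, the target label marginal is assumed to be known; in practice it must be estimated, since target labels are unavailable in UDA.

```python
import numpy as np

rng = np.random.default_rng(0)
y_src = rng.choice([0, 1], size=1000, p=[0.7, 0.3])  # source labels
p_tgt = np.array([0.4, 0.6])  # assumed (illustrative) target label marginal

# empirical source label marginal
p_src = np.bincount(y_src, minlength=2) / len(y_src)

# per-class importance weights w(y) = p_T(y) / p_S(y)
w = p_tgt / p_src
sample_w = w[y_src]  # one weight per source sample

# the weighted source label marginal now matches the target marginal
weighted_marginal = np.array(
    [sample_w[y_src == c].sum() for c in (0, 1)]
) / sample_w.sum()
print(np.round(weighted_marginal, 3))  # → [0.4 0.6]
```

In a full UDA procedure these per-sample weights would multiply the source losses (classification and domain-alignment terms), relaxing strict invariance of the representation into invariance of the reweighted source distribution.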
Similar Resources
Admissible domain representations of inductive limit spaces
In this paper we consider the following separate but related questions: “How do we construct an effective admissible domain representation of a space X from a pseudobasis on X?”, and “How do we construct an effective admissible domain representation of the inductive limit of a directed system (Xi)i∈I of topological spaces, given effective admissible domain representations of the individual spac...
Beyond Sharing Weights for Deep Domain Adaptation
The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem; It leverages annotated data from a source domain, in which it is a...
Analysis of Representations for Domain Adaptation
Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use...
Learning Transferrable Representations for Unsupervised Domain Adaptation
Supervised learning with large scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues under the presence of a domain shift between the training and the test data distribution. Since unsupervised domain adaptation algorithms directly address this domain shift problem...
Robust Speech Translation by Domain Adaptation
Speech translation tasks usually are different from text-based machine translation tasks, and the training data for speech translation tasks are usually very limited. Therefore, domain adaptation is crucial to achieve robust performance across different conditions in speech translation. In this paper, we study the problem of adapting a general-domain, writing-textstyle machine translation syste...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2021
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-030-67658-2_21